23 research outputs found

    Analysis of circuit imperfections in BosonSampling

    BosonSampling is a problem where a quantum computer offers a provable speedup over classical computers. Its main feature is that it can be solved with current linear optics technology, without the need for a full quantum computer. In this work, we investigate whether an experimentally realistic BosonSampler can really solve BosonSampling without any fault-tolerance mechanism. More precisely, we study how the unavoidable errors linked to an imperfect calibration of the optical elements affect the final result of the computation. We show that the fidelity of each optical element must be at least 1 - O(1/n^2), where n refers to the number of single photons in the scheme. Such a requirement seems to be achievable with state-of-the-art equipment. Comment: 20 pages, 7 figures, v2: new title, to appear in QI
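The fidelity threshold above can be illustrated numerically; a minimal sketch, assuming the constant hidden in the O(1/n^2) bound is 1 (the abstract only gives the scaling, not the constant):

```python
def required_fidelity(n, c=1.0):
    """Per-element fidelity threshold 1 - c/n^2 for n single photons.
    The constant c is a placeholder; only the O(1/n^2) scaling is given."""
    return 1.0 - c / n ** 2

for n in (10, 30, 100):
    print(f"n = {n:4d}: per-element fidelity >= {required_fidelity(n):.6f}")
```

The point of the scaling is that the per-element calibration requirement tightens only quadratically with the photon number, which is why it remains compatible with state-of-the-art equipment.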

    Regimes of classical simulability for noisy Gaussian boson sampling

    As a promising candidate for exhibiting quantum computational supremacy, Gaussian Boson Sampling (GBS) is designed to exploit the ease of experimental preparation of Gaussian states. However, sufficiently large and inevitable experimental noise might render GBS classically simulable. In this work, we formalize this intuition by establishing a sufficient condition for approximate polynomial-time classical simulation of noisy GBS, in the form of an inequality between the input squeezing parameter, the overall transmission rate and the quality of photon detectors. Our result serves as a non-classicality test that must be passed by any quantum computational supremacy demonstration based on GBS. We show that, for most linear-optical architectures, where photon loss increases exponentially with the circuit depth, noisy GBS loses its quantum advantage in the asymptotic limit. Our results thus delineate intermediate-sized regimes where GBS devices might considerably outperform classical computers for modest noise levels. Finally, we find that increasing the amount of input squeezing is helpful to evade our classical simulation algorithm, which suggests a potential route to mitigate photon loss. Comment: 13 pages, 4 figures, final version accepted for publication in Physical Review Letters
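The interplay between input squeezing and loss that underlies such a condition can be sketched with the standard single-mode relation for a squeezed vacuum sent through a loss channel (a textbook illustration only, not the paper's actual inequality, which also involves detector quality):

```python
import math

def squeezed_variance(r, eta):
    """Squeezed-quadrature variance (shot noise = 1) of a squeezed vacuum
    with squeezing parameter r after a pure-loss channel of transmission eta."""
    return eta * math.exp(-2.0 * r) + (1.0 - eta)

# Loss pulls the variance back toward the vacuum level 1; larger input
# squeezing r pushes it down, which is consistent with the observation
# that more squeezing helps resist photon loss.
```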

    Enhancing quantum entanglement by photon addition and subtraction

    The non-Gaussian operations effected by adding or subtracting a photon on the entangled optical beams emerging from a parametric down-conversion process have been suggested to enhance entanglement. Heralded photon addition or subtraction is, as a matter of fact, at the heart of continuous-variable entanglement distillation. The use of such processes has recently been experimentally demonstrated in the context of the generation of optical coherent-state superpositions or the verification of the canonical commutation relations. Here, we carry out a systematic study of the effect of local photon additions or subtractions on a two-mode squeezed vacuum state, showing that the entanglement generally increases with the number of such operations. This is analytically proven when additions or subtractions are restricted to one mode only, while we observe that the highest entanglement is achieved when these operations are equally shared between the two modes. We also note that adding photons typically provides a stronger entanglement enhancement than subtracting photons, while photon subtraction performs better in terms of energy efficiency. Furthermore, we analyze the interplay between entanglement and non-Gaussianity, showing that it is more subtle than previously expected. Comment: 10 pages, 6 figures
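For reference, the entanglement of the bare two-mode squeezed vacuum, before any photon addition or subtraction, has a simple closed form; a minimal sketch of this baseline (a standard formula, not a result specific to this paper):

```python
import math

def tmsv_entropy(r):
    """Entanglement entropy (in ebits) of a two-mode squeezed vacuum
    with squeezing parameter r; nbar = sinh(r)^2 is the mean photon
    number per mode."""
    nbar = math.sinh(r) ** 2
    if nbar == 0.0:
        return 0.0
    return (nbar + 1.0) * math.log2(nbar + 1.0) - nbar * math.log2(nbar)
```

Photon additions and subtractions modify this state in a way that, per the abstract, generally raises the entanglement above this Gaussian baseline.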

    Simulating boson sampling in lossy architectures

    Photon losses are among the strongest imperfections affecting multi-photon interference. Despite their importance, little is known about their effect on boson sampling experiments. In this work, we show that using classical computers, one can efficiently simulate multi-photon interference in all architectures that suffer from an exponential decay of the transmission with the depth of the circuit, such as integrated photonic circuits or optical fibers. We prove that either the depth of the circuit is large enough that it can be simulated by thermal noise with an algorithm running in polynomial time, or it is shallow enough that a tensor network simulation runs in quasi-polynomial time. This result suggests that in order to implement a quantum advantage experiment with single photons and linear optics, new experimental platforms may be needed.
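The exponential decay of transmission with depth, which drives the dichotomy above, can be sketched directly (the per-layer transmission 0.98 is an arbitrary illustrative value, not a number from the paper):

```python
def overall_transmission(depth, eta_layer=0.98):
    """Overall transmission of a circuit with per-layer transmission
    eta_layer; the exponential decay with depth is the regime discussed."""
    return eta_layer ** depth

def depth_below(threshold, eta_layer=0.98):
    """Smallest depth at which the overall transmission drops below threshold."""
    d = 0
    while overall_transmission(d, eta_layer) >= threshold:
        d += 1
    return d
```

Even a 2% per-layer loss halves the transmission within a few tens of layers, which is why deep circuits end up in the thermal-noise regime.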

    Quantum advantage from energy measurements of many-body quantum systems

    The problem of sampling outputs of quantum circuits has been proposed as a candidate for demonstrating a quantum computational advantage (sometimes referred to as quantum "supremacy"). In this work, we investigate whether quantum advantage demonstrations can be achieved for more physically motivated sampling problems, related to measurements of physical observables. We focus on the problem of sampling the outcomes of an energy measurement, performed on a simple-to-prepare product quantum state -- a problem we refer to as energy sampling. For different regimes of measurement resolution and measurement errors, we provide complexity-theoretic arguments showing that the existence of efficient classical algorithms for energy sampling is unlikely. In particular, we describe a family of Hamiltonians with nearest-neighbour interactions on a 2D lattice that can be efficiently measured with high resolution using a quantum circuit of commuting gates (IQP circuit), whereas an efficient classical simulation of this process should be impossible. In this high-resolution regime, which can only be achieved for Hamiltonians that can be exponentially fast-forwarded, it is possible to use current theoretical tools tying quantum advantage statements to a polynomial-hierarchy collapse, whereas for lower-resolution measurements such arguments fail. Nevertheless, we show that efficient classical algorithms for low-resolution energy sampling can still be ruled out if we assume that quantum computers are strictly more powerful than classical ones. We believe our work brings a new perspective to the problem of demonstrating quantum advantage and leads to interesting new questions in Hamiltonian complexity. Comment: Comments are welcome

    Loophole-free test of quantum non-locality using high-efficiency homodyne detectors

    We provide a detailed analysis of the recently proposed setup for a loophole-free test of the Bell inequality using conditionally generated non-Gaussian states of light and balanced homodyning. In the proposed scheme, a two-mode squeezed vacuum state is de-Gaussified by subtracting a single photon from each mode with the use of an unbalanced beam splitter and a standard low-efficiency single-photon detector. We thoroughly discuss the dependence of the achievable Bell violation on the various relevant experimental parameters, such as the detector efficiencies, the electronic noise and the mixedness of the initial Gaussian state. We also consider several alternative schemes involving squeezed states, linear optical elements, conditional photon subtraction and homodyne detection. Comment: 13 pages, 14 figures, RevTeX

    Security of continuous-variable quantum key distribution against general attacks

    We prove the security of Gaussian continuous-variable quantum key distribution against arbitrary attacks in the finite-size regime. The novelty of our proof is to consider symmetries of quantum key distribution in phase space in order to show that, to good approximation, the Hilbert space of interest can be considered to be finite-dimensional, thereby allowing for the use of the postselection technique introduced by Christandl, Koenig and Renner (Phys. Rev. Lett. 102, 020504 (2009)). Our result greatly improves on previous work based on the de Finetti theorem, which could not provide security for realistic, finite-size implementations. Comment: 5 pages, plus 11-page appendix

    Experimental implementation of non-Gaussian attacks on a continuous-variable quantum key distribution system

    An intercept-resend attack on a continuous-variable quantum-key-distribution protocol is investigated experimentally. By varying the interception fraction, one can implement a family of attacks where the eavesdropper totally controls the channel parameters. In general, such attacks add excess noise in the channel, and may also result in non-Gaussian output distributions. We implement and characterize the measurements needed to detect these attacks, and evaluate experimentally the information rates available to the legitimate users and the eavesdropper. The results are consistent with the optimality of Gaussian attacks resulting from the security proofs. Comment: 4 pages, 5 figures
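The family of partial attacks described above is parametrized by the interception fraction; a minimal sketch, assuming the standard result that a full intercept-resend attack on coherent states adds two shot-noise units of excess noise (one from the eavesdropper's heterodyne measurement, one from her re-prepared state):

```python
def excess_noise(mu):
    """Channel excess noise (in shot-noise units) for interception
    fraction mu in [0, 1], assuming each intercepted pulse contributes
    two shot-noise units of excess noise."""
    if not 0.0 <= mu <= 1.0:
        raise ValueError("interception fraction must lie in [0, 1]")
    return 2.0 * mu
```

Monitoring the measured excess noise against this linear scaling is what lets the legitimate users detect (and bound) a partial intercept-resend attack.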